To recognize masked faces, one possible solution is to first restore the occluded part of the face and then apply a face recognition method. Inspired by recent image inpainting approaches, we propose an end-to-end hybrid masked face recognition system, HIMFR, composed of three main components: a masked face detector, face inpainting, and face recognition. The masked face detector module applies a pretrained Vision Transformer (ViT_B32) to detect whether a face is covered by a mask. The face inpainting module uses a fine-tuned image inpainting model based on generative adversarial networks (GANs) to restore the face. Finally, a hybrid ViT-based face recognition module with an EfficientNetB3 backbone recognizes the face. We have implemented and evaluated our proposed method on four different publicly available datasets, CelebA, SSDMNV2, MAFA, and Pubfig83, together with a small locally collected dataset, Face5. Comprehensive experimental results demonstrate the efficacy of the proposed HIMFR method with competitive performance. Code is available at https://github.com/mdhosen/himfr
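A rough sketch of how such a three-stage pipeline could be wired together is given below, assuming PyTorch and timm; the InpaintingGAN placeholder, the identity count, and the use of a single EfficientNet-B3 classifier for the hybrid recognizer are illustrative assumptions rather than the authors' released implementation.

```python
# Hypothetical sketch of the three-stage pipeline described above:
# (1) ViT-based mask detection, (2) GAN-based inpainting, (3) face recognition.
import torch
import torch.nn as nn
import timm


class InpaintingGAN(nn.Module):
    """Placeholder for the fine-tuned GAN generator that restores the masked region."""

    def forward(self, masked_face: torch.Tensor) -> torch.Tensor:
        return masked_face  # identity stand-in; a real generator would inpaint here


class MaskedFacePipeline(nn.Module):
    def __init__(self, num_identities: int = 5):
        super().__init__()
        # Stage 1: pretrained ViT-B/32 fine-tuned as a binary mask/no-mask classifier.
        self.mask_detector = timm.create_model(
            "vit_base_patch32_224", pretrained=True, num_classes=2
        )
        # Stage 2: GAN-based face inpainting (stand-in module here).
        self.inpainter = InpaintingGAN()
        # Stage 3: recognizer; a single EfficientNet-B3 classifier is used here as a
        # simplification of the hybrid ViT + EfficientNetB3 module from the abstract.
        self.recognizer = timm.create_model(
            "efficientnet_b3", pretrained=True, num_classes=num_identities
        )

    def forward(self, face: torch.Tensor) -> torch.Tensor:
        # face: (B, 3, 224, 224) normalized RGB crops.
        is_masked = self.mask_detector(face).argmax(dim=1, keepdim=True)  # (B, 1)
        restored = torch.where(is_masked[:, :, None, None].bool(),
                               self.inpainter(face), face)
        return self.recognizer(restored)  # identity logits


if __name__ == "__main__":
    model = MaskedFacePipeline(num_identities=5).eval()
    with torch.no_grad():
        logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 5])
```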
Restoring realistic images in regions with high texture, such as removing a face mask, is challenging. State-of-the-art deep learning-based methods fail to guarantee high fidelity and suffer from unstable training due to the vanishing gradient problem (e.g., weak updates in the initial layers) and loss of spatial information. They also depend on intermediary stages such as segmentation, meaning they require external masks. This paper proposes a blind masked-face inpainting method using residual attention that removes the face mask and restores the face with fine details while minimizing the gap with the ground-truth facial structure. Residual blocks feed information to the next layer and directly to layers about two hops away, addressing the vanishing gradient problem. In addition, attention units help the model focus on the relevant mask region, reducing resource use and making the model faster. Extensive experiments on the publicly available CelebA dataset show the feasibility and robustness of our proposed model. Code is available at \url{https://github.com/mdhosen/mask-face-inpainting-using-isidual-cription-unet}.
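A minimal sketch of the two building blocks mentioned above, assuming PyTorch, is shown below; the layer widths and the exact form of the attention gate are illustrative assumptions, not the paper's architecture.

```python
# Residual block whose shortcut skips roughly two conv layers ("two hops"), plus a
# lightweight spatial attention unit that re-weights features toward the mask region.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut bypasses two conv layers, easing gradient flow.
        return self.act(self.body(x) + x)


class AttentionGate(nn.Module):
    """Spatial attention: produces a [0, 1] map that emphasizes relevant regions."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 64, 64)
    out = AttentionGate(64)(ResidualBlock(64)(feats))
    print(out.shape)  # torch.Size([1, 64, 64, 64])
```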
In this era of big data, it is difficult for the current generation to find the right data among the huge amount of data contained on online platforms. In this situation, an information filtering system is needed that can help them find the information they are looking for. In recent years, a research field called recommender systems has emerged. Recommenders have become important because of their many real-life applications. This paper reviews the different techniques and developments of recommender systems in e-commerce, e-business, e-resources, e-government, e-learning, and e-life. By analyzing recent work on this topic, we provide a detailed overview of current developments and identify existing difficulties in recommender systems. The final results give practitioners and researchers the necessary guidance and insights into recommender systems and their applications.
Recent research in disaster informatics has demonstrated practical and important use cases of artificial intelligence for saving human lives and reducing suffering during natural disasters based on social media content (text and images). While there has been notable progress using text, research leveraging images remains relatively limited. To advance image-based approaches, we propose MEDIC (available at: https://crisisnlp.qcri.org/medic/index.html), the largest social media image classification dataset for humanitarian response, consisting of 71,198 images across four different tasks in a multi-task learning setup. This is the first dataset of its kind for social media images, disaster response, and multi-task learning research. An important property of this dataset is its high potential to contribute to multi-task learning, which has recently received considerable interest from the machine learning community and has shown remarkable results in terms of memory, inference speed, performance, and generalization capability. The proposed dataset is therefore an important resource for advancing image-based disaster management and multi-task machine learning research.
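To make the multi-task setup concrete, the following hedged sketch (PyTorch/torchvision, with placeholder task names and class counts) shows a shared backbone with one classification head per task and a summed per-task loss:

```python
# Illustrative multi-task classifier: shared feature extractor, one head per task.
# Task names and class counts are placeholders, not the dataset's official tasks.
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskClassifier(nn.Module):
    def __init__(self, task_classes: dict):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()          # shared feature extractor
        self.backbone = backbone
        self.heads = nn.ModuleDict(
            {task: nn.Linear(feat_dim, n) for task, n in task_classes.items()}
        )

    def forward(self, images):
        feats = self.backbone(images)
        return {task: head(feats) for task, head in self.heads.items()}


if __name__ == "__main__":
    tasks = {"task_a": 3, "task_b": 2, "task_c": 5, "task_d": 4}  # placeholder tasks
    model = MultiTaskClassifier(tasks)
    logits = model(torch.randn(2, 3, 224, 224))
    # Total loss is typically a (weighted) sum of per-task cross-entropy losses.
    targets = {t: torch.randint(0, n, (2,)) for t, n in tasks.items()}
    loss = sum(nn.functional.cross_entropy(logits[t], targets[t]) for t in tasks)
    print(loss.item())
```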
Cancer is one of the most challenging diseases because of its complexity, variability, and diversity of causes. It has been one of the major research topics over the past decades, yet it is still poorly understood. To this end, multifaceted therapeutic frameworks are indispensable. \emph{Anticancer peptides} (ACPs) are the most promising treatment option, but their large-scale identification and synthesis require reliable prediction methods, which is still a problem. In this paper, we present an intuitive classification strategy that differs from the traditional \emph{black box} method and is based on the well-known statistical theory of \emph{sparse-representation classification} (SRC). Specifically, we create over-complete dictionary matrices by embedding the \emph{composition of the K-spaced amino acid pairs} (CKSAAP). Unlike the traditional SRC frameworks, we use an efficient \emph{matching pursuit} solver instead of the computationally expensive \emph{basis pursuit} solver in this strategy. Furthermore, the \emph{kernel principal component analysis} (KPCA) is employed to cope with non-linearity and dimension reduction of the feature space whereas the \emph{synthetic minority oversampling technique} (SMOTE) is used to balance the dictionary. The proposed method is evaluated on two benchmark datasets for well-known statistical parameters and is found to outperform the existing methods. The results show the highest sensitivity with the most balanced accuracy, which might be beneficial in understanding structural and chemical aspects and developing new ACPs. The Google-Colab implementation of the proposed method is available at the author's GitHub page (\href{https://github.com/ehtisham-Fazal/ACP-Kernel-SRC}{https://github.com/ehtisham-fazal/ACP-Kernel-SRC}).
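As a hedged illustration of this strategy, the sketch below strings together CKSAAP features, kernel PCA, SMOTE balancing, and an OMP-based sparse-representation classifier using scikit-learn and imbalanced-learn; all hyperparameters (gap size, kernel, number of components, sparsity level) are placeholder choices, not the values tuned in the paper.

```python
# Toy pipeline: CKSAAP features -> kernel PCA -> SMOTE-balanced dictionary ->
# sparse-representation classification via orthogonal matching pursuit (OMP).
import numpy as np
from itertools import product
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import OrthogonalMatchingPursuit
from imblearn.over_sampling import SMOTE

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]


def cksaap(sequence: str, k_max: int = 3) -> np.ndarray:
    """Composition of k-spaced amino acid pairs for gaps 0..k_max."""
    feats = []
    for k in range(k_max + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_pairs = max(len(sequence) - k - 1, 1)
        for i in range(len(sequence) - k - 1):
            pair = sequence[i] + sequence[i + k + 1]
            if pair in counts:
                counts[pair] += 1
        feats.extend(counts[p] / n_pairs for p in PAIRS)
    return np.array(feats)


def src_predict(x, dictionary, labels, n_nonzero=10):
    """Reconstruct x from the sparse code and assign the class with lowest residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(dictionary.T, x)
    code = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        part = np.where(labels == c, code, 0.0)        # keep only class-c atoms
        residuals[c] = np.linalg.norm(x - dictionary.T @ part)
    return min(residuals, key=residuals.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seqs = ["".join(rng.choice(list(AMINO_ACIDS), 30)) for _ in range(60)]
    y = np.array([0] * 45 + [1] * 15)                   # imbalanced toy labels
    X = np.stack([cksaap(s) for s in seqs])
    X = KernelPCA(n_components=20, kernel="rbf").fit_transform(X)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)  # balance the dictionary
    print("predicted class:", src_predict(X[0], X_bal, y_bal))
```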
Medical professionals frequently work in a data constrained setting to provide insights across a unique demographic. A few medical observations, for instance, inform the diagnosis and treatment of a patient. This suggests a unique setting for meta-learning, a method to learn models quickly on new tasks, to provide insights unattainable by other methods. We investigate the use of meta-learning and robustness techniques on a broad corpus of benchmark text and medical data. To do this, we developed new data pipelines, combined language models with meta-learning approaches, and extended existing meta-learning algorithms to minimize worst case loss. We find that meta-learning is a suitable framework for text-based data, providing better data efficiency and performance comparable to few-shot language models, and that it can be successfully applied to medical note data. Furthermore, meta-learning models coupled with DRO can improve worst case loss across disease codes.
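The worst-case-loss idea can be sketched in a few lines of PyTorch, shown below; the linear model and the grouping by disease code are placeholders for the actual language-model and data setup.

```python
# Group-DRO-style objective: compute a per-group loss (grouped here by disease code)
# and optimize the worst group instead of the plain average.
import torch
import torch.nn as nn


def worst_case_loss(logits, targets, group_ids, n_groups):
    per_example = nn.functional.cross_entropy(logits, targets, reduction="none")
    group_losses = []
    for g in range(n_groups):
        mask = group_ids == g
        if mask.any():
            group_losses.append(per_example[mask].mean())
    return torch.stack(group_losses).max()   # optimize the hardest group


if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Linear(16, 4)                           # stand-in for a language-model head
    x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
    groups = torch.randint(0, 3, (32,))                # e.g. disease-code groups
    loss = worst_case_loss(model(x), y, groups, n_groups=3)
    loss.backward()
    print(loss.item())
```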
Fine-grained population maps are needed in several domains, like urban planning, environmental monitoring, public health, and humanitarian operations. Unfortunately, in many countries only aggregate census counts over large spatial units are collected; moreover, these are not always up-to-date. We present POMELO, a deep learning model that employs coarse census counts and open geodata to estimate fine-grained population maps with 100m ground sampling distance. Moreover, the model can also estimate population numbers when no census counts at all are available, by generalizing across countries. In a series of experiments for several countries in sub-Saharan Africa, the maps produced with POMELO are in good agreement with the most detailed available reference counts: disaggregation of coarse census counts reaches R2 values of 85-89%; unconstrained prediction in the absence of any counts reaches 48-69%.
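A back-of-the-envelope sketch of the disaggregation step, assuming NumPy and toy numbers, is given below: predicted per-cell occupancy scores are used as weights to redistribute each coarse census count over its fine-grained cells.

```python
# Toy census disaggregation: coarse counts are redistributed over fine-grained cells
# in proportion to model-predicted occupancy scores. Array sizes and scores are toy values.
import numpy as np

rng = np.random.default_rng(0)
unit_of_cell = np.repeat([0, 1], 50)          # 100 fine cells in 2 coarse units
census = np.array([12_000.0, 3_500.0])        # known coarse counts per unit
scores = rng.random(100)                      # model's per-cell occupancy scores

fine_population = np.zeros_like(scores)
for u, total in enumerate(census):
    cells = unit_of_cell == u
    weights = scores[cells] / scores[cells].sum()
    fine_population[cells] = total * weights  # proportional redistribution

# The disaggregated map sums back to the census counts by construction.
print(fine_population[:5], fine_population.sum(), census.sum())
```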
The emergence of COVID-19 has had a global and profound impact, not only on society as a whole, but also on the lives of individuals. Various prevention measures were introduced around the world to limit the transmission of the disease, including face masks, mandates for social distancing and regular disinfection in public spaces, and the use of screening applications. These developments also triggered the need for novel and improved computer vision techniques capable of (i) providing support to the prevention measures through an automated analysis of visual data, on the one hand, and (ii) facilitating normal operation of existing vision-based services, such as biometric authentication schemes, on the other. Especially important here are computer vision techniques that focus on the analysis of people and faces in visual data and have been affected the most by the partial occlusions introduced by the mandates for facial masks. Such computer vision based human analysis techniques include face and face-mask detection approaches, face recognition techniques, crowd counting solutions, age and expression estimation procedures, models for detecting face-hand interactions and many others, and have seen considerable attention over recent years. The goal of this survey is to provide an introduction to the problems induced by COVID-19 into such research and to present a comprehensive review of the work done in the computer vision based human analysis field. Particular attention is paid to the impact of facial masks on the performance of various methods and recent solutions to mitigate this problem. Additionally, a detailed review of existing datasets useful for the development and evaluation of methods for COVID-19 related applications is also provided. Finally, to help advance the field further, a discussion on the main open challenges and future research directions is given.
Low-rank and sparse decomposition based methods find their use in many applications involving background modeling such as clutter suppression and object tracking. While Robust Principal Component Analysis (RPCA) has achieved great success in performing this task, it can take hundreds of iterations to converge and its performance decreases in the presence of different phenomena such as occlusion, jitter and fast motion. The recently proposed deep unfolded networks, on the other hand, have demonstrated better accuracy and improved convergence over both their iterative equivalents and other neural network architectures. In this work, we propose a novel deep unfolded spatiotemporal RPCA (DUST-RPCA) network, which explicitly takes advantage of the spatial and temporal continuity in the low-rank component. Our experimental results on the moving MNIST dataset indicate that DUST-RPCA gives better accuracy when compared with the existing state-of-the-art deep unfolded RPCA networks.
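For reference, the sketch below shows the classical iterative RPCA (principal component pursuit) baseline that deep unfolded RPCA networks unroll, implemented in NumPy with common default parameter choices; it is not the proposed DUST-RPCA model.

```python
# Classical iterative RPCA: decompose M into a low-rank part L and a sparse part S
# by alternating singular value thresholding and elementwise soft thresholding.
import numpy as np


def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)


def svt(x, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(soft_threshold(s, tau)) @ vt


def rpca(M, n_iter=100):
    lam = 1.0 / np.sqrt(max(M.shape))          # common default weight on ||S||_1
    mu = M.size / (4.0 * np.abs(M).sum())      # common default penalty parameter
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)               # low-rank update
        S = soft_threshold(M - L + Y / mu, lam / mu)    # sparse update
        Y = Y + mu * (M - L - S)                        # dual update
    return L, S


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    low_rank = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
    sparse = (rng.random((60, 40)) < 0.05) * rng.standard_normal((60, 40)) * 10
    L, S = rpca(low_rank + sparse)
    print("rank(L):", np.linalg.matrix_rank(L, tol=1e-3),
          "rel. error:", np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))
```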
Virtual reality (VR) over wireless is expected to be one of the killer applications in next-generation communication networks. Nevertheless, the huge data volume along with stringent requirements on latency and reliability under limited bandwidth resources makes untethered wireless VR delivery increasingly challenging. Such bottlenecks, therefore, motivate this work to seek the potential of using semantic communication, a new paradigm that promises to significantly ease the resource pressure, for efficient VR delivery. To this end, we propose a novel framework, namely WIreless SEmantic deliveRy for VR (WiserVR), for delivering consecutive 360{\deg} video frames to VR users. Specifically, multiple deep learning-based modules are devised for the WiserVR transceiver to realize high-performance feature extraction and semantic recovery. Among them, we develop a concept of a semantic location graph and leverage a joint semantic-channel coding method with knowledge sharing, not only to substantially reduce communication latency but also to guarantee adequate transmission reliability and resilience under various channel states. Moreover, an implementation of WiserVR is presented, followed by corresponding initial simulations for performance evaluation compared with benchmarks. Finally, we discuss several open issues and offer feasible solutions to unlock the full potential of WiserVR.
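A heavily simplified sketch of the joint semantic-channel coding idea is given below, assuming PyTorch: an encoder produces a compact latent, an AWGN channel perturbs it, and a decoder reconstructs the frame, with the whole chain trainable end to end; the architecture, latent size, and noise model are illustrative assumptions rather than the WiserVR transceiver design.

```python
# Toy semantic transceiver: encoder -> AWGN channel -> decoder, trained end to end.
import torch
import torch.nn as nn


class SemanticTransceiver(nn.Module):
    def __init__(self, latent_dim=256, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def channel(self, z):
        # AWGN channel at a fixed SNR; a real system would adapt to channel state.
        power = z.pow(2).mean()
        noise_power = power / (10 ** (self.snr_db / 10))
        return z + torch.randn_like(z) * noise_power.sqrt()

    def forward(self, frame):
        return self.decoder(self.channel(self.encoder(frame)))


if __name__ == "__main__":
    model = SemanticTransceiver()
    frame = torch.rand(1, 3, 64, 64)          # stand-in for a 360-degree video frame
    recon = model(frame)
    loss = nn.functional.mse_loss(recon, frame)
    print(recon.shape, loss.item())
```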